
Chain of Prompts

The Chain of Prompts (CoP) approach plays a crucial role in LUCI reasoning, especially for agents specialized in healthcare. CoP is a method in which an AI agent solves a complex task by breaking it into sequential, logical steps, often using multiple prompts to guide its reasoning process. In healthcare, where accuracy and reliability of decision-making are critical, a well-structured chain of prompts ensures thoroughness and clarity throughout that process.
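
To make the chaining idea concrete, here is a minimal, package-agnostic sketch of a prompt chain in which each step's answer is fed into the next prompt. The call_model function is a hypothetical placeholder standing in for whatever LLM client you use; it is not part of the Luci package.

def call_model(prompt: str) -> str:
    # Hypothetical placeholder for an LLM call; replace with your own model client.
    return f"[model answer to: {prompt.splitlines()[-1]}]"

def run_prompt_chain(steps: list[str]) -> str:
    """Run prompts sequentially, feeding each answer into the next step."""
    answer = ""
    for step in steps:
        # Earlier answers are included so later steps can build on them.
        prompt = f"{answer}\n\n{step}".strip()
        answer = call_model(prompt)
    return answer

# Example chain for a simple healthcare workflow
steps = [
    "List the key symptoms described in the patient note.",
    "Based on those symptoms, suggest possible differential diagnoses.",
    "Recommend next diagnostic steps for the most likely diagnosis.",
]
print(run_prompt_chain(steps))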

This Python script demonstrates how to use the Luci package's create_master_prompt function to generate a master prompt from a list of individual prompts, with information sharing between the prompts and search functionality enabled.

Features

  • Model Name: The script allows you to specify a model (e.g., "cerina").
  • Connected Prompts: Setting connected=True lets information flow between the prompts, so earlier results can influence later prompts.
  • Search Integration: Enabling search=True lets the agent retrieve information via search queries during prompt generation. A variant with both flags disabled is sketched after this list.
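
Both flags can also be turned off. The following sketch assumes the same create_master_prompt signature used in the full script below, but treats the prompts independently and skips external search; the example prompts are illustrative only.

import asyncio
from Luci import create_master_prompt

async def independent_example():
    # Same call as in the full script below, with both flags disabled:
    # prompts are processed independently and no search queries are issued.
    master_prompt = await create_master_prompt(
        model_name="cerina",
        prompts=[
            "Summarize common documentation burdens for clinicians.",
            "List privacy considerations when using AI scribes.",
        ],
        connected=False,  # No information sharing between prompts
        search=False      # No external search queries
    )
    print(master_prompt)

# asyncio.run(independent_example())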

Code Explanation

import asyncio
from Luci import create_master_prompt

async def main():
    # Specify the model to be used for generating the master prompt
    model_name = "cerina"

    # Define your list of prompts
    prompts = [
        "Generate the recent progress of revmaxx llc",
        "Explain the benefits of using AI in healthcare."
    ]

    # Generate the master prompt using the standalone function
    master_prompt = await create_master_prompt(
        model_name=model_name,  # Model used for prompt generation
        prompts=prompts,        # List of individual prompts
        connected=True,         # Enable information sharing between prompts
        search=True             # Enable search functionality for retrieving external data
    )

    print("Generated Master Prompt:", master_prompt)

# Run the async main function
if __name__ == "__main__":
    asyncio.run(main())
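
Because create_master_prompt is a coroutine, it must be awaited inside an async function; asyncio.run(main()) starts the event loop and executes the script when the file is run directly.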